The article presents EntropyLong, a training method for long-context language models that uses predictive uncertainty to ensure the quality of long-range dependencies in its training data. The approach identifies high-entropy positions, retrieves contexts relevant to them, and builds training samples from the result; extensive evaluations show that models trained this way perform significantly better on tasks that require distant information.
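As a rough illustration of the pipeline the summary describes, the sketch below computes per-token predictive entropy, flags high-entropy positions, and splices retrieved contexts into a training sample. The entropy threshold, the `retrieve` callable, and the sample layout are illustrative assumptions, not details taken from the article.

```python
# Hypothetical sketch of entropy-guided construction of long-context training samples.
# `next_token_probs` and `retrieve` stand in for a real LM forward pass and retriever.
import numpy as np

def token_entropies(next_token_probs: np.ndarray) -> np.ndarray:
    """Shannon entropy (nats) of each position's next-token distribution.
    next_token_probs: (seq_len, vocab_size), rows summing to 1."""
    p = np.clip(next_token_probs, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=-1)

def build_sample(document_tokens, next_token_probs, retrieve, entropy_threshold=4.0):
    """Flag high-entropy positions and prepend retrieved context for them.

    retrieve(position) -> list of context token sequences judged relevant to the
    flagged position (e.g. via a BM25 or dense retriever -- assumed here).
    The threshold value is an arbitrary placeholder.
    """
    ents = token_entropies(next_token_probs)
    hard_positions = np.flatnonzero(ents > entropy_threshold)

    retrieved = []
    for pos in hard_positions:
        retrieved.extend(retrieve(int(pos)))

    # Training sample: retrieved long-range contexts followed by the original
    # document, so predicting the flagged tokens requires distant information.
    long_context = [tok for ctx in retrieved for tok in ctx]
    return long_context + list(document_tokens)
```

In the method itself, the placeholder threshold and retriever would presumably be replaced by the article's own position-scoring and context-verification procedure; the sketch only conveys the overall data-construction flow.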